Communication enables agents to cooperate to achieve their goals. Learning when to communicate, i.e., sparse (in time) communication, and whom to message is particularly important when bandwidth is limited. Recent work in learning sparse individualized communication, however, suffers from high variance during training, where decreasing communication comes at the cost of decreased reward, particularly in cooperative tasks. We use the information bottleneck to reframe sparsity as a representation learning problem, which we show naturally enables lossless sparse communication at lower budgets than prior art. In this paper, we propose a method for true lossless sparsity in communication via Information Maximizing Gated Sparse Multi-Agent Communication (IMGS-MAC). Our model uses two individualized regularization objectives, an information-maximization autoencoder and a sparse communication loss, to create informative and sparse communication. We evaluate the learned communication 'language' through direct causal analysis of messages in non-sparse runs to determine the range of lossless sparse budgets, which allow zero-shot sparsity, and the range of sparse budgets that incur a reward loss, which our learned gating function minimizes with few-shot sparsity. To demonstrate the efficacy of our results, we experiment in cooperative multi-agent tasks where communication is essential for success. We evaluate our model with both continuous and discrete messages. We focus our analysis on a variety of ablations to show the effect of message representations, including their properties, and the lossless performance of our model.
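As a rough illustration of the two regularizers described above, the following sketch (hypothetical names and architecture, not the authors' code) pairs an autoencoder reconstruction term with a budgeted gate penalty:

```python
import torch
import torch.nn as nn

class GatedCommSketch(nn.Module):
    """Illustrative sketch: an agent encodes its observation into a message,
    a learned gate decides whether to send it, and two auxiliary losses
    encourage messages that are informative yet sparse in time."""

    def __init__(self, obs_dim, msg_dim, budget=0.3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, msg_dim))
        self.decoder = nn.Sequential(nn.Linear(msg_dim, 64), nn.ReLU(), nn.Linear(64, obs_dim))
        self.gate = nn.Sequential(nn.Linear(obs_dim, 1), nn.Sigmoid())
        self.budget = budget  # target fraction of timesteps with communication (assumed knob)

    def forward(self, obs):
        msg = self.encoder(obs)
        p_send = self.gate(obs)                                    # probability of sending
        sent_msg = p_send * msg                                    # soft gating during training
        recon_loss = ((self.decoder(msg) - obs) ** 2).mean()       # information-maximization (autoencoder) term
        sparsity_loss = torch.relu(p_send.mean() - self.budget)    # penalize exceeding the communication budget
        return sent_msg, recon_loss, sparsity_loss
```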
The decarbonization of buildings presents new challenges for the reliability of the electrical grid as a result of the intermittency of renewable energy sources and the increase in grid load brought about by end-use electrification. To restore reliability, grid-interactive efficient buildings can provide flexibility services to the grid through demand response. Residential demand response programs are hindered by the need for manual intervention by customers. To maximize the energy flexibility potential of residential buildings, an advanced control architecture is needed. Reinforcement learning (RL) is well-suited for the control of flexible resources as, compared to expert systems, it is able to adapt to unique building characteristics. Yet, factors hindering the adoption of RL in real-world applications include its large data requirements for training, control security, and generalizability. Here we address these challenges by proposing the MERLIN framework and using a digital twin of a real-world 17-building grid-interactive residential community in CityLearn. We show that 1) independent RL controllers for batteries improve building- and district-level KPIs compared to a reference rule-based controller (RBC) by tailoring their policies to individual buildings, 2) despite unique occupant behaviours, transferring the RL policy of any one of the buildings to other buildings provides comparable performance while reducing the cost of training, and 3) training RL controllers on limited temporal data that does not capture full seasonality in occupant behaviour has little effect on performance. Although the zero-net-energy (ZNE) condition of the buildings could be maintained or worsened as a result of controlled batteries, KPIs that are typically improved by the ZNE condition (electricity price and carbon emissions) are further improved when the batteries are managed by an advanced controller.
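A minimal sketch of the independent-controller setup, assuming a gym-style district environment that returns one observation, action slot, and reward per building (the actual CityLearn API and the MERLIN training details may differ):

```python
def train_independent_controllers(env, agents, episodes=10):
    """Sketch: each building's battery is controlled by its own RL agent.
    `env` is assumed to expose a gym-like reset()/step() interface with one
    observation and one action per building; `agents` supply act()/update()."""
    for _ in range(episodes):
        observations = env.reset()
        done = False
        while not done:
            # one action per building, chosen independently
            actions = [agent.act(obs) for agent, obs in zip(agents, observations)]
            next_observations, rewards, done, _ = env.step(actions)
            for agent, obs, act, rew, nxt in zip(agents, observations, actions,
                                                 rewards, next_observations):
                agent.update(obs, act, rew, nxt)   # e.g. an SAC or DDPG update (assumed)
            observations = next_observations
```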
Current reinforcement learning algorithms use forward-generated trajectories to train the agent. Forward-generated trajectories give the agent little guidance, so that the agent can explore as much as possible. While the appeal of reinforcement learning comes from sufficient exploration, this trades off sample efficiency. Sample efficiency is an important factor that determines the performance of an algorithm. Past work uses reward-shaping techniques and changes to the network structure to increase sample efficiency; however, these methods require many steps to implement. In this work, we propose novel reverse curriculum reinforcement learning. Reverse curriculum learning starts training the agent using the backward trajectory of the episode rather than the original forward trajectory. This gives the agent a strong reward signal, so the agent can learn in a more sample-efficient manner. Moreover, our method requires only a minor change to the algorithm: reversing the order of the trajectory before training the agent. Therefore, it can be readily applied to any state-of-the-art algorithm.
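Since the method amounts to reversing the transition order before learning, a toy sketch looks like the following (`replay_buffer.add` stands in for whatever interface the chosen base algorithm uses):

```python
def add_reversed_episode(replay_buffer, episode):
    """Toy sketch of the core change: the episode is collected forward as usual,
    but its transitions are fed to the learner in reverse order, so updates start
    from the (typically rewarding) end of the trajectory."""
    for transition in reversed(list(episode)):   # (state, action, reward, next_state, done)
        replay_buffer.add(transition)            # any off-the-shelf RL algorithm consumes these
```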
The emergence of large pretrained models has enabled language models to achieve superior performance in common NLP tasks, including language modeling and question answering, compared to previous static word representation methods. Augmenting these models with a retriever that retrieves related text and documents as supporting information has shown promise in solving NLP problems more interpretably, given that the additional knowledge is injected explicitly rather than being captured in the models' parameters. In spite of the recent progress, our analysis of retriever-augmented language models shows that this class of language models still lacks reasoning over the retrieved documents. In this paper, we study the strengths and weaknesses of different retriever-augmented language models, such as REALM, kNN-LM, FiD, ATLAS, and Flan-T5, in reasoning over the selected documents in different tasks. In particular, we analyze the reasoning failures of each of these models and study how the models' failures in reasoning are rooted in the retriever module as well as the language model.
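For context, a minimal retrieve-then-read sketch of the kind of pipeline analyzed here, assuming a stand-in retriever object with a hypothetical `.search()` method and using Flan-T5 as the reader:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

def retrieve_then_read(question, retriever, model_name="google/flan-t5-base", k=5):
    """Sketch of a retrieve-then-read pipeline: retrieved passages are simply
    concatenated into the prompt of a seq2seq LM. The retriever is a stand-in
    with an assumed .search() method, not a specific library's API."""
    passages = retriever.search(question, k=k)            # hypothetical retriever interface
    prompt = "\n".join(passages) + f"\nQuestion: {question}\nAnswer:"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_new_tokens=32)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```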
Quantum state tomography aims to estimate the state of a quantum mechanical system, which is described by a trace-one, Hermitian positive semidefinite complex matrix, given a set of measurements of the state. Existing works focus on estimating the density matrix that represents the state using a compressive sensing approach, with fewer measurements than required for a tomographically complete set, under the assumption that the true state has low rank. One popular method for estimating the state is the Singular Value Thresholding (SVT) algorithm. In this work, we present a machine learning approach to estimate the quantum state of n-qubit systems by unrolling the iterations of SVT, which we call Learned Quantum State Tomography (LQST). As merely unrolling SVT may not ensure that the output of the network meets the constraints required for a quantum state, we design and train a custom neural network whose architecture is inspired by the iterations of SVT, with additional layers to meet the required constraints. We show that our proposed LQST, with very few layers, reconstructs the density matrix with much better fidelity than the SVT algorithm, which takes many hundreds of iterations to converge. We also demonstrate the reconstruction of the quantum Bell state from an informationally incomplete set of noisy measurements.
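For reference, one SVT-style step applies singular-value soft-thresholding, and a density-matrix projection of the kind the extra LQST layers are meant to enforce can be sketched as follows (illustrative only, not the authors' network):

```python
import numpy as np

def svt_step(X, tau):
    """Singular-value soft-thresholding D_tau(X): shrink singular values by tau."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vh

def project_to_density_matrix(X):
    """Simple sketch of projecting onto Hermitian, PSD, trace-one matrices
    (the quantum-state constraints): clip negative eigenvalues, renormalize."""
    H = (X + X.conj().T) / 2                 # Hermitian part
    w, V = np.linalg.eigh(H)
    w = np.maximum(w, 0.0)                   # enforce positive semidefiniteness
    if w.sum() == 0:
        w = np.ones_like(w)
    w = w / w.sum()                          # enforce unit trace
    return (V * w) @ V.conj().T
```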
Automation in farming processes is a growing field of research in both academia and industry. A considerable amount of work has been put into this field to develop systems robust enough for farming. Terrace farming, in particular, presents a varying set of challenges, including robust stair-climbing methods and stable navigation in unstructured terrains. We propose the design of a novel autonomous terrace farming robot, Aarohi, that can effectively climb steep terraces of considerable heights and execute several farming operations. The design optimisation strategy for the overall mechanical structure is elucidated. Further, the embedded and software architecture, along with fail-safe strategies, is presented for a working prototype. Algorithms for autonomous traversal over the terrace steps using the scissor lift mechanism and for performing various farming operations are also discussed. The adaptability of the design to specific operational requirements and modular farm tools allows Aarohi to be customised for a wide variety of use cases.
Syntax is a latent hierarchical structure which underpins the robust and compositional nature of human language. An active line of inquiry is whether large pretrained language models (LLMs) are able to acquire syntax by training on text alone; understanding a model's syntactic capabilities is essential to understanding how it processes and makes use of language. In this paper, we propose a new method, SSUD, which allows for the induction of syntactic structures without supervision from gold-standard parses. Instead, we seek to define formalism-agnostic, model-intrinsic syntactic parses by using a property of syntactic relations: syntactic substitutability. We demonstrate both quantitative and qualitative gains on dependency parsing tasks using SSUD, and induce syntactic structures which we hope provide insight into LLMs and linguistic representations alike.
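One way the substitutability idea could be operationalized is sketched below; this is a conceptual illustration under assumed interfaces, not necessarily SSUD's exact procedure:

```python
import numpy as np

def substitutability_scores(sentence, position, substitutes, pair_score_fn):
    """Conceptual sketch: average a model-derived word-pair score matrix
    (e.g. from attention) over sentences in which the word at `position` is
    replaced by syntactically substitutable alternatives, so the induced
    attachment reflects the syntactic slot rather than the lexical item.
    `pair_score_fn` is a hypothetical hook returning an n-by-n score matrix."""
    variants = [list(sentence)]
    for sub in substitutes:
        variant = list(sentence)
        variant[position] = sub
        variants.append(variant)
    return np.mean([pair_score_fn(v) for v in variants], axis=0)
```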
We provide a differentially private algorithm for simultaneously generating synthetic data for multiple tasks: marginal queries and multitask machine learning (ML). A key innovation in our algorithm is the ability to work directly with numerical features, in contrast to many related prior methods, which require numerical features to first be converted into high-cardinality categorical features via a binning strategy. Higher binning granularity is required for accuracy, but this negatively affects scalability. Eliminating the need for binning allows us to produce synthetic data that preserves a large number of statistical queries, such as marginals on numerical features and conditional linear threshold queries. Preserving the latter means that the fraction of points of each class label lying above a particular halfspace is roughly the same in both the real and synthetic data. This is the property needed to train linear classifiers in the multitask setting. Our algorithm also allows us to provide high-quality synthetic data for mixed marginal queries that combine categorical and numerical features. Our method consistently runs 2-5x faster than the best comparable techniques and provides significant accuracy improvements on marginal queries and on linear prediction tasks for mixed-type datasets.
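The conditional linear threshold (halfspace) statistic mentioned above can be written as a short check; this illustrates only the query being preserved, not the private generation mechanism itself:

```python
import numpy as np

def halfspace_query(features, labels, w, b, target_class):
    """Fraction of points with the given class label lying above the halfspace
    w.x >= b -- the kind of conditional linear threshold statistic the synthetic
    data is meant to preserve (illustrative only)."""
    mask = labels == target_class
    if mask.sum() == 0:
        return 0.0
    return float(np.mean(features[mask] @ w >= b))

# Preservation means halfspace_query(real_X, real_y, w, b, c) stays close to
# halfspace_query(synth_X, synth_y, w, b, c) across many choices of (w, b, c).
```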
Real-world facial expression recognition (FER) datasets suffer from noisy annotations due to crowdsourcing, the ambiguity of expressions, the subjectivity of annotators, and inter-class similarity. Recent deep networks, however, have a strong capacity to memorize noisy annotations, leading to corrupted feature embeddings and poor generalization. To handle noisy annotations, we propose a dynamic FER learning framework (DNFER), in which clean samples are selected based on dynamic class-specific thresholds during training. Specifically, DNFER is based on supervised training using the selected clean samples and unsupervised training using all samples. During training, the mean posterior class probabilities of each mini-batch are used as dynamic class-specific thresholds to select clean samples for supervised training. This threshold is independent of the noise rate and, unlike other methods, does not require any clean data. In addition, to learn from all samples, an unsupervised consistency loss is used to align the posterior distributions between weakly augmented and strongly augmented images. We demonstrate the robustness of DNFER on both synthetically and real-world noisy annotated FER datasets such as RAF-DB, FERPlus, SFEW, and AffectNet.
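A sketch of the two loss terms as described in the abstract, with details that are assumptions rather than the authors' implementation:

```python
import torch
import torch.nn.functional as F

def dnfer_losses(logits_weak, logits_strong, noisy_labels):
    """Illustrative sketch of the two DNFER terms: supervised cross-entropy on
    samples selected by dynamic class-specific thresholds (per-batch mean
    posterior per class), plus an unsupervised consistency loss aligning the
    posteriors of weakly and strongly augmented views of all samples."""
    probs_weak = logits_weak.softmax(dim=1)
    thresholds = probs_weak.mean(dim=0)                         # dynamic per-class thresholds
    label_probs = probs_weak.gather(1, noisy_labels.unsqueeze(1)).squeeze(1)
    clean = label_probs >= thresholds[noisy_labels]             # select "clean" samples
    sup_loss = (F.cross_entropy(logits_weak[clean], noisy_labels[clean])
                if clean.any() else logits_weak.new_zeros(()))
    cons_loss = F.kl_div(logits_strong.log_softmax(dim=1), probs_weak.detach(),
                         reduction="batchmean")                 # weak/strong consistency
    return sup_loss, cons_loss
```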
In this work, we study policy-based methods for solving the reinforcement learning problem, where off-policy sampling and linear function approximation are employed for policy evaluation, and various policy update rules, including natural policy gradient (NPG), are used for the policy update. To solve the policy evaluation sub-problem in the presence of the deadly triad, we propose a generic algorithmic framework of multi-step TD-learning with generalized importance sampling ratios, which includes two specific algorithms: the $\lambda$-averaged $Q$-trace and the two-sided $Q$-trace. The generic algorithm is single time-scale, has provable finite-sample guarantees, and overcomes the high-variance issue in off-policy learning. As for the policy update, we provide a generic analysis using only the contraction and monotonicity properties of the Bellman operator to establish geometric convergence under various policy update rules. Importantly, by viewing NPG as an approximate way of implementing policy iteration, we establish the geometric convergence of NPG without introducing regularization and without resorting to the mirror-descent-type analysis used in the existing literature. Combining the geometric convergence of the policy update with the finite-sample analysis of the policy evaluation, we establish, for the first time, an overall $\mathcal{O}(\epsilon^{-2})$ sample complexity for finding an optimal policy (up to a function approximation error) using policy-based methods under off-policy sampling and linear function approximation.
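For reference, a standard multi-step off-policy evaluation operator with clipped importance-sampling ratios (in the spirit of Retrace($\lambda$)) takes the form below; the paper's $Q$-trace variants generalize how the ratios $c_i$ are chosen, so this is an illustration rather than the paper's exact operator:
\[
\mathcal{T}Q(s_t,a_t) = Q(s_t,a_t) + \mathbb{E}_{\mu}\Big[\sum_{n \ge t} \gamma^{\,n-t} \Big(\prod_{i=t+1}^{n} c_i\Big)\big(r_n + \gamma\,\mathbb{E}_{a\sim\pi} Q(s_{n+1},a) - Q(s_n,a_n)\big)\Big], \qquad c_i = \lambda \min\Big(1, \frac{\pi(a_i\mid s_i)}{\mu(a_i\mid s_i)}\Big).
\]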